Editable Stain Transformation Of Histological Images Using Unpaired GANs

Sloboda, Tibor, Hudec, Lukáš, Benešová, Wanda

arXiv.org Artificial Intelligence

Double staining in histopathology applies two different dyes to help identify tissue features and cell types that cannot be differentiated with a single stain. In the case of metaplastic breast cancer, H&E and P63 are often used in conjunction for diagnosis. However, P63 tends to damage the tissue and is prohibitively expensive, motivating the development of virtual staining methods, i.e., methods that use artificial intelligence in computer vision for diagnostic stain transformation. In this work, we present the capability of the new xAI-CycleGAN architecture to transform H&E-stained samples of breast tissue with metaplastic cancer into P63-stained images. The architecture is based on Mask CycleGAN and explainability-enhanced training, further augmented with structure-preserving features and the ability to edit the output to bring generated samples closer to ground-truth images. We demonstrate that it preserves structure well and produces images of superior quality, and that output editing can be used to approach real images, opening the door to further tuning frameworks that perfect the model through the editing approach. Additionally, we present the results of a survey conducted with histopathologists, who evaluated the realism of the generated images in a pairwise comparison task: the approach produces high-quality images that are sometimes indistinguishable from ground truth, and our model's outputs receive a high overall realism rating.


xAI-CycleGAN, a Cycle-Consistent Generative Assistive Network

Sloboda, Tibor, Hudec, Lukáš, Benešová, Wanda

arXiv.org Artificial Intelligence

In the domain of unsupervised image-to-image translation using generative models, CycleGAN has become the architecture of choice. One of its primary downsides is a relatively slow rate of convergence. In this work, we use discriminator-driven explainability to speed up the convergence of the generative model: following the work of Nagisetty et al., saliency maps from the discriminator mask the gradients of the generator during backpropagation, and, building on Wang M.'s Mask CycleGAN, a saliency map is also added onto a Gaussian noise mask at the input via an interpretable latent variable. This enables explainability fusion in both directions, with the noise-added saliency map on the input acting as evidence-based counterfactual filtering. The new architecture has a much higher rate of convergence than a baseline CycleGAN while preserving image quality.
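The gradient-masking idea described in the abstract can be sketched in a few lines. The helper below is a hypothetical simplification, not the paper's exact scheme: the function name, the quantile-threshold rule, and the `keep_fraction` parameter are all illustrative assumptions. It zeroes the generator's gradient components wherever the discriminator's saliency map falls below a threshold, so only gradients at the positions the discriminator found salient survive backpropagation.

```python
import numpy as np

def saliency_mask_gradients(generator_grads, saliency, keep_fraction=0.5):
    """Mask generator gradients with a discriminator-derived saliency map.

    Illustrative sketch: keep only the gradient components at the
    `keep_fraction` most salient spatial positions, zeroing the rest.
    """
    threshold = np.quantile(saliency, 1.0 - keep_fraction)
    mask = (saliency >= threshold).astype(generator_grads.dtype)
    return generator_grads * mask

rng = np.random.default_rng(0)
grads = rng.normal(size=(4, 4))        # stand-in for generator gradients
saliency = rng.uniform(size=(4, 4))    # stand-in for a discriminator saliency map
masked = saliency_mask_gradients(grads, saliency, keep_fraction=0.5)
```

In a real training loop this masking would typically be applied via per-tensor gradient hooks rather than on raw arrays, but the filtering logic is the same.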


Interpretable Latent Variables in Deep State Space Models

Wu, Haoxuan, Matteson, David S., Wells, Martin T.

arXiv.org Machine Learning

We introduce a new version of deep state-space models (DSSMs) that combines a recurrent neural network with a state-space framework to forecast time series data. The model estimates the observed series as functions of latent variables that evolve non-linearly through time. Due to the complexity and non-linearity inherent in DSSMs, previous works on DSSMs typically produced latent variables that are very difficult to interpret. Our paper focuses on producing interpretable latent parameters through two key modifications. First, we simplify the predictive decoder by restricting the response variables to be a linear transformation of the latent variables plus some noise. Second, we utilize shrinkage priors on the latent variables to reduce redundancy and improve robustness. These changes make the latent variables much easier to understand and allow us to interpret them as random effects in a linear mixed model. We show on two public benchmark datasets that the resulting model improves forecasting performance.
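The restricted decoder described in the abstract can be illustrated with a minimal sketch, assuming a loading matrix C and Gaussian observation noise (the names `linear_decoder`, `loading`, and `noise_scale` are illustrative placeholders, not from the paper): the response is simply a linear transformation of the latent variables plus noise, which is what allows the latents to be read as random effects in a linear mixed model.

```python
import numpy as np

def linear_decoder(latent, loading, noise_scale=0.1, rng=None):
    """Decode a latent state as y = C z + eps (linear map plus Gaussian noise)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    mean = loading @ latent
    return mean + rng.normal(scale=noise_scale, size=mean.shape)

rng = np.random.default_rng(1)
C = rng.normal(size=(5, 2))     # loading matrix: 2 latents -> 5 observed responses
z = np.array([0.7, -1.2])       # latent state at one time step
y = linear_decoder(z, C, noise_scale=0.05, rng=rng)
y_noiseless = linear_decoder(z, C, noise_scale=0.0)
```

Because the decoder is linear, each observed response is an interpretable weighted sum of the latent variables; the shrinkage priors the paper places on the latents would act on `z` during inference, not in this decoding step.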